
Project Hail Mary is in theaters—but do the linguistics work?


The film adaptation of Andy Weir's novel Project Hail Mary hits general release today, March 20, and it's great—go see it! Though a little light on the science, the movie goes hard on the relationship between schoolteacher Ryland Grace (Ryan Gosling) and an extraterrestrial named Rocky, and it's a ride well worth taking.

But as good as it is, the movie shares a small flaw with the book: Despite having very few things in common, Grace and Rocky learn to communicate with each other extremely quickly. In fact, Grace and Rocky begin conversing in abstracts (concepts like "I like this" and "friendship") in even less time than it takes in the book. Obviously, there are practical narrative reasons for this choice—you can't have a good buddy movie if your buddies can't talk to each other. It's therefore critical to the flow of the story to get that talking happening as soon as possible, but it can still be a little jarring for the technically minded viewer who was hoping for the acquisition of language to be treated with a little more complexity.

And because this is Ars Technica, we're doing the same thing we did when the book came out: talking with Dr. Betty Birner, a former professor of linguistics at NIU (now retired), to pick her brain about cognition, pragmatics, cooperation, and what it would actually take for two divergently evolved sapient beings not just to gesture and pantomime but to truly communicate. And this time, we'll hear from Andy Weir, too. So buckle up, dear readers—things are gonna get nerdy.

A word about spoilers

This article assumes you've read Weir's novel and that you've seen the movie. However, for folks who haven't yet seen the film, I don't think there's much to be spoiled in terms of the language acquisition portions that we're going to discuss—the film covers much the same ground as the book, but in a far more abbreviated way.

Still, if you want to avoid literally all spoilers, skip this article for now—at least until you've been to the theater!

The yawning chasm of "meaning"

Dr. Birner's specific field of study is the science of pragmatics. "Pragmatics has to do with what I intend by what I say and what I mean in a particular context," she explained to Ars on a Zoom call earlier this week. She elaborated by bringing up her (nonexistent) cat—the phrase "my cat" can have a multitude of meanings attached, all of which are inferred by context.

If you know Dr. Birner has a cat, her saying "my cat" could refer to that cat; if you know that she doesn't have a cat but used to, "my cat" could refer to that cat instead, even though the semantics of the phrase "my cat" haven't changed. That's pragmatics, baby!

Pragmatics is particularly relevant to the Grace/Rocky language-acquisition problem because the discipline involves the inferences a listener makes about the speaker's mental state and about the specific meanings the speaker implies.

But "meaning" is a fraught word here, too, because ultimately we cannot know for certain the exact meaning being implied by another person because we cannot ever truly peek inside someone else's mind. "We are always making guesses about what our shared context is and what our shared cultural beliefs are, and, indeed, what our shared knowledge as members of the species are," Dr. Birner continued. "And I think of this because of thumbs-up/thumbs-down."

Two Eridian thumbs up. Or down! Credit: Amazon MGM Studios

"The cognitive linguists George Lakoff and Mark Johnson put out a book, boy, back in the '80s," she said. "They talked about all of language as metaphorically built up from embodiment, our embodied experience, and our senses. So we sense up and down, and then we have this whole metaphorical notion of happy is up, so we have a thumbs up, 'I'm feeling up today. I'm just feeling high. My spirits are lifting.'"

"Or, I can be down in the dumps," she said. "I can be feeling low, my mood is dropping, thumbs down, and there's this whole metaphorical conception. And I loved the way Project Hail Mary played with that in that Rocky didn't share that. Rocky did not have a metaphor of 'happy is up,' the way Lakoff and Johnson would say we all just do."

I asked Dr. Birner if our "up is good, down is bad" association has a biological basis in our cognition or if it's something that has simply been shaped into a broadly shared metaphor over thousands of years of language use, and she took a moment to answer.

"That's a really good question, and I don't remember whether they deal with that," she said. "But I could imagine it being biological because we start as little helpless things that can't even stand up. And soon we stand up, we get taller, we get smarter, we get better and better the taller we get. I can actually very well imagine a biological basis for it."

The first leap—not math, but truth

Let's focus on some of the specific linguistic mountains Grace and Rocky would have had to climb. The one that strikes me as perhaps the most basic: starting from pantomime and establishing the twin concepts of yes and no, along with the companion dualities of true/false and equal/not-equal. To me, this feels like the most mandatory of basics.

And here, perhaps, we can fall back on some good ol' Sagan—or at least the movie version of Sagan. Dr. Birner and I (along with my colleague Jennifer Ouellette, who also hung around on the Zoom call) went back and forth for some time, but in the end, no one could really figure out a more straightforward way to demonstrate these concepts than the "primer" scene in 1997's Contact, where the unknown alien signal is shown to contain a small grouping of symbols that appeared to represent addition, along with "equals" and "not equals" sign equivalents.

Sagan's Contact leads with a quick remedial math lesson. Credit: Warner Bros.

"That's a good way to go about it, with equivalent and not-equivalent," said Dr. Birner. "So at least you get negation, and now you can work on perceptual oppositions—up and down, black and white, loud and soft. I think that would probably be the jumping-off place for yes and no."

There are linguistic biases in English and other human languages that might peek through even here—the inherent tie between "positive" (as in agreement) and "positive" (as in "this thing is good and I like it"). Careful aliens would likely want to spend a fair amount of time interrogating this bias—if it's even visible at this point. And it likely wouldn't be, as we haven't built any of those syntactic bridges yet.

Pidgin? Not so fast

Getting those bridges built—going past "yes" and "no" and into some of the other basics that must be established to communicate—is not straightforward. Grace and Rocky benefit from being in a tightly constrained environment with a set of mutual problems to solve; two humans in a similar situation would likely develop a "pidgin"—an ad-hoc working language cobbled together out of components of both speakers' languages.

But as Dr. Birner points out, true pidgin here is impossible because neither Grace nor Rocky is capable of actually producing the sounds required to speak the other's language in the first place. "They don't actually develop a pidgin," she said. "They each have to learn the other's language receptively, not productively."

Rocky (left) and Grace (right), learning each other's languages. Credit: Amazon MGM Studios

"Which is great," she went on, "because when kids acquire language, it's sort of a truism that reception precedes production. Every kid is going to understand more than they're producing. Necessarily! You can't produce what you don't understand yet. So it makes the problem a little easier for Grace and Rocky—they don't have to produce each other's language, just understand it."

Who is even there?

Grace and Rocky are lucky in that both humans and Eridians are ultimately extremely similar in their cognition and linguistics, even if their vocalizations aren't alike. This means a lot of the mandatory requirements for conversation as we understand them are already present.

"If I encounter Rocky, I need to know, does he have a mind?" she posited. "Does he have what we call a theory of mind? Does he have a mind like mine? And does he understand that I have a mind like his, but separate? Does he understand that I can believe different things from what he believes? Can I have false beliefs? That's all a prerequisite for communicating at all. If your mind and my mind had all the exact same stuff in it, there'd be no need to communicate.

"H.P. Grice said that communication doesn't happen without the assumption that both parties are being cooperative," she said. The word "cooperative" here doesn't necessarily mean that both parties are copacetic—Dr. Birner pointed out that even when people are fighting, they tend to still be cooperatively communicating. There are rules to the interaction that must be followed if one party intends to impart meaning to the other.

Beyond adherence to the cooperative principle, another bedrock of communication is the notion of symbols, the understanding that a word can represent not just an abstract concept but can actually stand in for a thing. "I can use the word mug," explained Dr. Birner, holding up a mug, "and mean this. And you understand what I mean, and I don't have to show you the mug every single time."

Also on the "mandatory" list is an understanding of the concept of displacement, which Dr. Birner attributes to the researcher Charles F. Hockett. "Displacement has long been said to be solely human, though not everyone agrees with that. It's the ability to refer to something that is distant in time or space. I can tell you that I had a bagel this morning, even though I'm not having it right now and it's not present right here. I had it elsewhere and I had it earlier," she said.

She continued: "There's this wonderful article, 1979 by Michael Reddy, called 'The Conduit Metaphor,' where he says that we think in metaphors. And the metaphor he's talking about is that language is a conduit, and we really just pass ideas from my brain to yours. And he says it's a false metaphor. It's clearly not true that that's what happens, but we talk about it as though it does. 'I didn't catch your meaning,' or 'Give that to me again.' We talk as though this is a thing we literally convey, and of course we don't convey meanings. Reddy argues that the vast majority of human communication is actually miscommunication, but so trivially that we never notice."

By way of example, she referenced her nonexistent cat again. "If I mentioned my cat, Sammy, well, you'll have some mental image of a cat," she said. "It almost certainly isn't remotely like Sammy, but it doesn't matter. I don't need to explain everything about Sammy. If I did, the conversation would grind to a halt and you'd never interview me again. Also, I'd be violating the cooperative principle because I would be saying too much for the current context."

Math, the universal language?

It is a common trope in science fiction—and one brought up more than once in the comments on our last article on this subject—that "math is the only universal language." It's a fun, pithy saying that perhaps makes mathematicians feel good about their dusty chalkboards, but at least from my knothole, it's a false generalization because the language in which one does one's mathematics must be settled before any mathing can happen.

"I'm not sure that even is true on Earth," said Dr. Birner about the notion of math as a universal language. "The concept of zero hasn't always been around, and how much math can you do without zero? There are languages that count, 'One, two, three, many,' and that's it. And those are human languages. So to say, 'Math is a universal language,' I'm already not totally on board there."

Gosling as Ryland Grace, employing math to save the world. Credit: Jonathan Olley / Amazon MGM Studios

"I think math would help, but I don't think it would get them terribly far because they need the notion of objects. They need the notion of the semiotic function, that things stand for other things." She paused pensively, then went on. "And once they've got that, that there are discrete objects and we both think of the same things as discrete objects, then we can talk about counting those objects and now we're off and running."

Whole-object notion is another oft-overlooked component here—often referred to as the "gavagai problem."

"You're pointing to a rabbit, and you say, 'gavagai!'" said Dr. Birner. "Well, does that mean 'rabbit?' Does that mean 'fur?' Does that mean 'ears?' Does that mean, 'hey look?'"

"Quine's notion is that we default to a whole object. Well, does what counts as a whole object for me count as a whole object for you? Does every conceivable culture have discrete borders on objects?"

The author speaks on human-Eridian similarities

Fortunately for Grace and Rocky, humans and Eridians do have all these things in common because in the universe of Project Hail Mary, the species share a common ancestor.

"Within the fictional context of this story," explained Andy Weir to Ars in an interview, "the natural evolution of life began on planet Adrian in the Tau Ceti system. Then what we can call primordial Astrophage, like an ancestor of Astrophage, caused a panspermia event. It just kind of emanated out from the system and ended up seeding just a few planets."

Author Andy Weir on the Hail Mary bridge set during filming (Weir was a producer on the movie). Credit: Jonathan Olley / Amazon MGM Studios

"That panspermia event was about four and a half billion years ago. It seeded Earth with life. It seeded 40 Eridani—or rather, Erid with life, and maybe others as well. That means everything within a certain radius of Tau Ceti has a decent chance of having been infected with life, and all of that life is related. That's why Eridian cells and Astrophage and human cells all have mitochondria, the powerhouse of the cell, and ribosomes, and DNA or RNA, and so on."

Weir notes that he worked through a number of the same linguistic issues that Dr. Birner and I raised as part of the story-generation process.

"Let's say you have intelligent life on the planet," he said. "What do you need? What does that species need to have to reach the point where they're able to make spacecraft and fly around in space? Well, first off, you have to be a tribal thing. You can't be loners. You can't be like bears and tigers that don't communicate with each other. You have to have the sense of a community or a tribe or a group or a gathering so that you can collaborate because you can specialize and do all these things. You need that."

"Number two, you need language. One way or another, stuff from my brain has to get into your brain," he said, echoing Dr. Birner's note about Reddy's conduit metaphor paper.

"Number three is you need empathy and compassion. A collection of beings altogether doesn't work unless they actually are willing to take care of each other. And that's not just found in humans—it's found in primates. It's found in wolf packs. It's found in ants. It's like any collectivized species has to have that trait."

"You need to have compassion, empathy, which means putting yourself in somebody else's situation. Compassion, empathy, language, a decent amount of intelligence, a tribal instinct, a group instinct, a society kind of building instinct," he said. "You must, I believe, have all of those things in order to be able to make a spaceship. Any species that's lacking any one of those won't be able to do it. So any alien you meet in space is going to have all of those traits. The Friendly Great Filter is that any aliens you meet, I believe, have to have this concept of society, cooperation, empathy, compassion, collaboration, and so on."

Sexy hero nerd Ryan Gosling, saving the world with his biceps and kind eyes. Credit: Jonathan Olley / Amazon MGM Studios

I'm here for Weir's explanation—it works within the context of the science fiction universe we're being presented, and Rocky and Grace need to be able to talk to each other or we don't have a book (or a film!). But does it ring true under scrutiny? After all, even here on Earth, there is a wealth of problem-solving, tool-using creatures much more closely related than humans and Eridians with vastly different cognitive toolkits. Cephalopods (with distributed nervous systems and pseudo-autonomous arms), corvids, and cetaceans all have their own evolutionary approaches to communication.

"There's a point in the book where Grace says that Rocky is less closely related to him than the trees in his backyard," said Dr. Birner. "Yes, we are family, but the tree in the backyard is more closely related. And I thought, well, let's talk about that tree. Trees communicate. I don't think trees have language. They don't have a displacement, they don't have syntax, they don't have discrete units that can be put together in various ways or replace this unit with that and you've said something different."

"That's all language stuff, but they communicate," she said. "One tree will put out a chemical through its roots that warns another tree or all the local trees about something going on. OK, it's communicative. Is it intentional? I would say no—your mileage may vary—but once you say that I'm closer to that tree biologically than I am to Rocky, even though all three of us came from that same panspermia event, I don't think you can lean on that for saying why it's so easy to communicate with Rocky."

Here, Ars' Jennifer Ouellette made an important point. "Rocky is basically a rock," she said. "He's not a human form, and that's going to affect how a language, if there is one, evolves in that species—and it's really going to impact how they communicate."

"Yes, embodiment is a big deal in communications," replied Dr. Birner, returning to the subject she'd brought up earlier: that the nature of our flesh-prisons inherently shapes not just how we experience the world but how we communicate. Our physical forms are the product of evolutionary pressures—the results of the inevitable, inscrutable dialogue between environment and organism. And the evolutionary pressures faced by Homo sapiens on Earth are vastly different from those faced by Eridians on Erid, where that same dialogue led to vastly different outcomes.

"You may have heard of this," she said, "Thomas Nagel and his wonderful paper 'What Is It Like To Be a Bat?'" (I had not, but Ouellette had—in fact, she'd heard Nagel deliver a lecture on the paper.)

"I can try to imagine what it's like to be a bat," said Dr. Birner, "feeling the air beneath my wings and the sonar, but I'm imagining what it's like to be me being a bat. I can't imagine what it's like for that bat. And to bring it back to even within human beings, I don't know what it feels like to be Lee or Jennifer. I just don't. And even if, Jennifer, we just met, you could tell me, 'Oh, this is how old I am, and I have N kids for some value of N or N partner or whatever, a dog. I love to run in the morning. I have bagels for breakfast.' You could go on and on and on, and I will never know what it is like to be Jennifer. And it's such an interesting concept because it really messes with the notion of just what you were saying, Lee: How do you communicate with a mind you can never enter?"

Rocky in all his beautiful glory. Credit: Amazon MGM Studios

"And that's where I think that anthropic principle really cheats in most science fiction," she finished up, referring to the principle that we're all here to see a story, and certain things about the aliens need to be true for the story to work. "Because you would have to spend so much of your book describing what it is like to be Rocky—which, again, I'd read that book!—but it wouldn't be a book about saving Erid and saving Earth because you'd need 500 pages just of what does it feel like to be an Eridian. And so what you end up with in both the book and the movie is it seems to feel a lot like being a human, except you've got a harder shell."

Real first contact?

Given all of these factors, including and especially the idea that we can probably rely on a spacefaring technological species to have some of the same cognitive touchstones as humanity but also that "probably" is not "definitely," I asked Dr. Birner how she might proceed if she suddenly found herself tapped to communicate with an alien and how she might know that meaning has been achieved.

She mentioned that for this scenario in Arrival—a film for which she did some consulting—the characters start with trying to establish names, but that naming might not be the right tack.

"There's so much that has to come before that," she said. "I like what Grace did, and I think I would probably do the same thing. Walk to the other end of the cage, see if it follows me. Assuming mobility—a big assumption, but let's assume mobility and assume no obvious face. Let's assume a Rocky-like creature. Will they follow me? That would establish some kind of cooperativity and theory of mind, that we now both think we both have minds."

2016's Arrival goes with a "names-first" approach to linguistic acquisition. Credit: Paramount

"From there, presence of mind and friendly, mutual intent. At that point we go to the semiotic function—do you label things? Do you have things that stand for other things?"

"Grace does this with a clock," I pointed out, "which works out to be probably the smartest thing he could have done because Rocky also has a clock handy and recognizes that that's what it is."

"Yeah, Rocky understands that it's a clock because it's got these little things that go around from image to incomprehensible image," she said, referring to the spinning hands on the clock and Rocky's apparent reasoning process. "I don't know. Again, I'm saying you have to try to think 'what is it like to be Rocky?' Would you know that that's a clock? Maybe you don't measure time with numbers. Are there different measuring systems, some of which use math and some of which don't?"

We pointed out that it's narratively convenient that Eridians seem to use a written number system that employs place value and has a distinct base—and how even on Earth, that's not a given. Many counting methodologies both ancient and modern are not place value systems—how much more complicated the conversation would have been had Grace brought out a clock using Roman numerals instead of Arabic!

Ultimately, as Ouellette then pointed out, any spacefaring entity must have some kind of working concept of space and time (either as the modern linked notion of "space-time" or in the old Newtonian separate mode)—though Weir made the fun choice of having Eridians lack any understanding of relativity.

Friendly aliens

The most dangerous thing about communicating with aliens this way isn't mistaking a word or two—it's the more fundamental problem of what happens to third- and fourth-order assumptions when the foundations those assumptions are built on aren't quite right. Sure, Grace and Rocky can agree that they are "friends," but how do you explain "friend"?

"To be someone's friend can mean a million things," said Dr. Birner. "I have my best friend since high school. I consider you a friend," she said, pointing at me through the screen, "and we've talked three times. My daughter, who's now 35, has turned into my friend. What does that mean?"

Indeed, the notion of "friend" is a rough one—it's fundamental to human interaction, and as such, it carries with it a huge number of (sometimes contradictory) behavioral expectations. When you're explaining "friends" to an alien, how do you paint it? That you and the alien have shared interests and should therefore work together? That you are genuinely interested in the alien's well-being? That you'd make sacrifices for them? That you'd expect them to help you haul furniture when you move?

And what assumptions might you make about the alien's behavior once you'd declared each other "friends"? That they would make sacrifices for you? What if for the alien, the concept they've settled on for "friendship" means they'll pull your limbs off when the adventure is over because that's what friends do in their culture?

"You need societal grouping," I supplied, "but you don't necessarily need friends."

"Absolutely," she said. "And now I'm going to another work from 1982, Maltz and Borker, who looked at kids on the playground, and at that time—I think it's changed a lot, it's been 40-some years!—but at that time, they saw that little girls had a horizontal set of relationships. It was all friendship-based and secrets-based, and you have your best friend and then your next best friends. And little boys had a hierarchy, and your whole goal was to get higher in the hierarchy by insulting the kids above you and whacking them and try to be king of the hill."

"Get the conch," I joked unhelpfully.

"Yeah, exactly—get the conch. Again, cultural knowledge."

Thumbs up! Or down!

Realistically, it's hard to see two creatures like Grace and Rocky so quickly managing to progress from pointing at objects to communicating about complicated emotional abstracts. Dr. Birner readily conceded that it would be easiest for two beings in a Grace-n-Rocky situation to build a small shared technical vocabulary out of pantomime, pointing at things, and other easily demonstrable physical concepts (and, indeed, this is how they start their communication). But the leap from demonstrables to abstracts is fraught.

Even the fact that Eridian language seems to more or less be human language wearing different pants doesn't necessarily help us much. It makes it easier for the mechanics of communication to occur—Grace and Rocky easily grok that each other's language has a definable syntax, expressed through the arrangement of chunked human words or the continuous sets of overlapping Eridian tones—but divining the true biologically freighted meaning of those messages grows increasingly uncertain the farther they get from physical demonstrations.

We are all slaves to our perceptions, and our languages and the meanings we aim at rest upon biological and sociological foundations that would be terribly difficult to convey to an alien with just a few weeks of focused effort.

But again, that's not necessarily good buddy-movie storytelling, is it? It's perhaps realistic, but it would make for a very different tale than the star-saving adventures of Grace and Rocky.

A bit of Grace and Rocky.

Which, by the way, are excellent. You've stuck with me through four thousand words of linguistic bloviating—and I have much more, with plenty more details, in the first piece from a few years ago! And if I can count on your attention for just a few more words, it would be to entreat you to go see Project Hail Mary. It is a fantastic film with a charming duo at its core—and even if the linguistics invite a lot of critique, it's all coming from a place of love.

Even if Rocky doesn't quite understand how hugs work. It's OK. He figures them out.


Learning to read C++ compiler errors: Illegal use of -> when there is no -> in sight


A customer reported a problem with a system header file. When they included ole2.h, the compiler reported an error in oaidl.h:

    MIDL_INTERFACE("3127CA40-446E-11CE-8135-00AA004BB851")
    IErrorLog : public IUnknown
    {
    public:
        virtual HRESULT STDMETHODCALLTYPE AddError( // error here
            /* [in] */ __RPC__in LPCOLESTR pszPropName,
            /* [in] */ __RPC__in EXCEPINFO *pExcepInfo) = 0;
        
    };

The error message is

    oaidl.h(5457,43): error C3927: '->': trailing return type is not allowed after a non-function declarator
    oaidl.h(5457,43): error C3613: missing return type after '->' ('int' assumed)
    oaidl.h(5457,43): error C3646: 'Log': unknown override specifier
    oaidl.h(5457,43): error C2275: 'LPCOLESTR': expected an expression instead of a type
    oaidl.h(5457,43): error C2146: syntax error: missing ')' before identifier 'pszPropName'
    oaidl.h(5459,60): error C2238: unexpected token(s) preceding ';'

The compiler is seeing ghosts: It’s complaining about things that aren’t there, like -> and Log.

When you see the compiler reporting errors about things that aren’t in the code, you should suspect a macro, because macros can insert characters into code.

In this case, I suspected that there was a macro called AddError whose expansion included the token ->.

The customer reported that they had no such macro.

I asked them to generate a preprocessor file for the code that isn’t compiling. That way, we can see what is being produced by the preprocessor before it goes into the part of the compiler that is complaining about the illegal use of ->. Is there really no -> there?

The customer reported back that, oops, they did indeed have a macro called AddError. Disabling the macro fixed the problem.

The compiler can at times be obtuse with its error messages, but as far as I know, it isn’t malicious. If it complains about a misused ->, then there is probably a -> that is being misused.

The post Learning to read C++ compiler errors: Illegal use of -> when there is no -> in sight appeared first on The Old New Thing.


Will 'AI-Assisted' Journalists Bring Errors and Retractions?

Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal. "AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said...

A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said... Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers.

While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence," he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..."

"Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite."

Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said that people are what make journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava. For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently.... Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue. Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian."

But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter, who argues: "We must stem the idea being pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not... Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave..."

But meanwhile, USA Today recently tried hiring for a new position: AI-assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...

Read more of this story at Slashdot.


Research finds AI users scarily willing to "surrender" their cognition to LLMs


When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. The research also offers an experimental examination of when and why people are willing to outsource their critical thinking to AI, and of how factors like time pressure and external incentives affect that decision.

Just ask the answer machine

In "Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender," researchers from the University of Pennsylvania sought to build on existing scholarship that outlines two broad categories of decision-making: one shaped by "fast, intuitive, and affective processing" (System 1); and one shaped by "slow, deliberative, and analytical reasoning" (System 2). The onset of AI systems, the researchers argue, has created a new, third category of "artificial cognition" in which decisions are driven by "external, automated, data-driven reasoning originating from algorithmic systems rather than the human mind."

In the past, people have often used tools from calculators to GPS systems for a kind of task-specific "cognitive offloading," strategically delegating some jobs to reliable automated algorithms while using their own internal reasoning to oversee and evaluate the results. But the researchers argue that AI systems have given rise to a categorically different form of "cognitive surrender" in which users provide "minimal internal engagement" and accept an AI's reasoning wholesale without oversight or verification. This "uncritical abdication of reasoning itself" is particularly common when an LLM's output is "delivered fluently, confidently, or with minimal friction," they point out.

To measure the prevalence and effect of this kind of cognitive surrender to AI, the researchers performed a number of studies based on Cognitive Reflection Tests. These tests are designed to elicit incorrect answers from participants that default to "intuitive" (System 1) thought processes, but to be relatively simple to answer for those who use more "deliberative" (System 2) thought processes.
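The article doesn't reproduce any test items, but the best-known CRT question (from Frederick's original test, not necessarily the one used in this study) is the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. System 1 blurts out "10 cents"; System 2 sets up the arithmetic, which a few lines of Python make explicit:

```python
from fractions import Fraction

# Bat-and-ball problem: bat + ball = $1.10 and bat = ball + $1.00.
# Exact fractions avoid floating-point noise in the cents.
total = Fraction(110, 100)  # $1.10 combined
diff = Fraction(100, 100)   # bat costs $1.00 more than the ball

# From bat + ball = total and bat - ball = diff:
ball = (total - diff) / 2
bat = ball + diff

assert bat + ball == total and bat - ball == diff
print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")  # ball = $0.05, bat = $1.05
```

The deliberative answer is 5 cents, not the intuitive 10; a ten-cent ball would make the pair cost $1.20.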

Test subjects who consulted AI were overwhelmingly willing to accept its answers without scrutiny, whether correct or not. Credit: Shaw and Nave

For their experiments, the researchers provided participants with optional access to an LLM chatbot that had been modified to randomly provide inaccurate answers to the CRT questions about half the time (and accurate answers the other half). The researchers hypothesized that users who frequently consulted the chatbot would let those incorrect answers "override intuitive and deliberative processes," hurting their overall performance and highlighting the dangers of cognitive surrender.

In one study, an experimental group with access to this modified AI consulted it for help with about 50 percent of the presented CRT problems. When the AI was accurate, those AI users accepted its reasoning about 93 percent of the time. When the AI was randomly "faulty," though, those users still accepted the AI reasoning a lower (but still high) 80 percent of the time, showing that the mere presence of the AI frequently "displaced internal reasoning," according to the researchers.

Unsurprisingly, the AI-using experimental group did much better than the "brain-only" control group when the AI provided accurate answers, and much worse than the control when the AI was inaccurate. Significantly, though, the group that used AI scored 11.7 percent higher on a measure of their own confidence in their answers, even though the LLM provided wrong answers half the time.

In another study, adding incentives (in the form of small payments) and immediate feedback for correct answers increased the likelihood that participants successfully overruled the faulty AI by 19 percentage points relative to the baseline, showing that salient consequences can encourage AI users to spend extra time verifying responses. But adding time pressures in the form of a 30-second timer decreased that tendency to correct the faulty AI by 12 percentage points, suggesting to the researchers that "when decision time is scarce, the internal monitor detecting conflict and recruiting deliberation is less likely to trigger."

"Lowering the threshold for scrutiny"

Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write.

Subjects with high trust in AI were more likely to be misled by faulty responses, while those with high "Fluid IQ" were less likely to be misled by the AI. Credit: Shaw and Nave

These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.

Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest.

"As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender."

In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.
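That closing point lends itself to a back-of-the-envelope model: if users accept AI answers at the rates reported above, their expected accuracy on consulted problems is mostly a function of AI quality. A rough sketch using the article's acceptance figures; the unaided solve rate after rejecting the AI is an invented placeholder, not a number from the paper:

```python
# Expected accuracy on AI-consulted CRT problems, using acceptance rates
# reported in the article. p_unaided is a made-up illustrative value for
# how often a participant solves the problem alone after rejecting the AI.
p_ai_correct = 0.5        # the modified chatbot was right half the time
accept_when_right = 0.93  # reported acceptance rate for accurate answers
accept_when_wrong = 0.80  # reported acceptance rate for faulty answers
p_unaided = 0.40          # assumption, for illustration only

# Correct if: AI right and accepted, or AI rejected but solved unaided.
p_correct = (
    p_ai_correct * (accept_when_right + (1 - accept_when_right) * p_unaided)
    + (1 - p_ai_correct) * (1 - accept_when_wrong) * p_unaided
)
print(f"expected accuracy on consulted problems: {p_correct:.1%}")  # 51.9%
```

Plug in a more accurate AI and the number rises; plug in a worse one and it falls, which is exactly the "performance tracks AI quality" vulnerability the researchers describe.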





How can I use Read­Directory­ChangesW to know when someone is copying a file out of the directory?


A customer was using Read­Directory­ChangesW in the hopes of receiving a notification when a file was copied. They found that when a file was copied, they received a FILE_NOTIFY_CHANGE_LAST_ACCESS, but only once an hour. And they also got that notification even for operations unrelated to file copying.

Recall that Read­Directory­ChangesW and Find­First­Change­Notification are for detecting changes to information that would appear in a directory listing. Your program can perform a Find­First­File/Find­Next­File to cache a directory listing, and then use Read­Directory­ChangesW or Find­First­Change­Notification to be notified that the directory listing has changed, and you have to invalidate your cache.
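The cache-and-invalidate pattern described here isn't Windows-specific. The following is not Read­Directory­ChangesW (which pushes a notification rather than requiring a poll); it's a minimal Python analogue showing what "information that would appear in a directory listing" means, and when a cached listing has to be thrown away:

```python
import os
import tempfile

def snapshot(path):
    """Cache what a directory listing shows: each entry's size and mtime."""
    return {
        entry.name: (entry.stat().st_size, entry.stat().st_mtime_ns)
        for entry in os.scandir(path)
    }

def listing_changed(path, cache):
    """True when the cached listing is stale -- the moral equivalent of the
    change notification firing. It says nothing about *why* things changed."""
    return snapshot(path) != cache

with tempfile.TemporaryDirectory() as d:
    cache = snapshot(d)                   # take the initial listing
    assert not listing_changed(d, cache)  # nothing has happened yet

    with open(os.path.join(d, "new.txt"), "w") as f:
        f.write("hello")                  # creating a file changes the listing

    assert listing_changed(d, cache)      # time to invalidate the cache
```

Merely reading (or copying out) `new.txt` would not necessarily change this snapshot at all, which is the article's point: the listing-level view tells you the directory changed, never who copied what.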

But there are a lot of operations that don’t affect a directory listing.

For example, a program could open a file in the directory with last access time updates suppressed. (Or the volume might have last access time updates suppressed globally.) There is no change to the directory listing, so no event is signaled.

Functions like Read­Directory­ChangesW and Find­First­Change­Notification operate at the file system level, so the fundamental operations they see are things like “read” and “write”. They don’t know why somebody is reading or writing. All they know is that it’s happening.

If you are a video rental store, you can see that somebody rented a documentary about pigs. But you don’t know why they rented that movie. Maybe they’re doing a school report. Maybe they’re trying to make illegal copies of pig movies. Or maybe they simply like pigs.

If you are the file system, you see that somebody opened a file for reading and read the entire contents. Maybe they are loading the file into Notepad so they can edit it. Or maybe they are copying the file. You don’t know. Related: If you let people read a file, then they can copy it.

In theory, you could check, when a file is closed, whether all the write operations collectively combine to form file contents that match a collective set of read operations from another file. Or you could hash the file to see if it matches the hash of any other file.¹ But these extra steps would get expensive very quickly.
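The false-positive problem with the hashing idea is easy to demonstrate: a content hash identifies identical bytes, not provenance, so two independently created empty files compare as "copies" of each other. A minimal sketch with Python's standard hashlib (the file names are hypothetical):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """Hash a file's contents in chunks; identical bytes -> identical digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a.txt")
    b = os.path.join(d, "b.txt")
    open(a, "wb").close()  # two empty files, created independently --
    open(b, "wb").close()  # neither is a "copy" of the other
    # Both digests are SHA-256 of zero bytes, so a hash-based copy
    # detector would flag each as a copy of the other.
    assert file_digest(a) == file_digest(b)
```

The digests match even though no copy operation ever occurred, which is precisely the "identical merely by coincidence" case the footnote warns about.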

Indeed, we found during user research that a common way for users to copy files is to load them into an application, and then use Save As to save a copy somewhere else. In many cases, this “copy” is not byte-for-byte identical to the original, although it is functionally identical. (For example, it might have a different value for Total editing time.) Therefore, detecting copying by comparing file hashes is not always successful.²

If your goal is to detect files being “copied” (however you choose to define it), you’ll have to operate at another level. For example, you could use various data classification technologies to attach security labels to files and let the data classification software do the work of preventing files from crossing security levels. These technologies usually work best in conjunction with programs that have been updated to understand and enforce these data classification labels. (My guess is that they also use heuristics to detect and classify usage by legacy programs.)

¹ It would also generate false positives for files that are identical merely by coincidence. For example, every empty file would be flagged as a copy of every other empty file.

Windows 2000 Server had a feature called Single Instance Store which looked for identical files, but it operated only when the system was idle. It didn’t run during the copy operation. This feature was subsequently deprecated in favor of Data Deduplication, which looks both for identical files as well as identical blocks of files. Again, Data Deduplication runs during system idle time. It doesn’t run during the copy operation. The duplicate is detected only after the fact. (Note the terminology: It is a “duplicate” file, not a “copy”. Two files could be identical without one being a copy of the other.)

² And besides, even if the load-and-save method produces byte-for-byte identical files, somebody who wanted to avoid detection would just make a meaningless change to the document before saving it.

The post How can I use Read­Directory­ChangesW to know when someone is copying a file out of the directory? appeared first on The Old New Thing.


EPA Flags Microplastics, Pharmaceuticals As Contaminants In Drinking Water

An anonymous reader quotes a report from NPR: Responding to public health concerns about microplastics and pharmaceuticals in the nation's drinking water, the Trump administration for the first time has placed them on a draft list of contaminants maintained by the Environmental Protection Agency. The EPA announced the move Thursday, touting it as a "historic step" for the Make America Healthy Again, or MAHA, movement, which often raises concerns about toxic chemicals and plastic pollution in our food and environment. Also Thursday, the Department of Health and Human Services announced a $144 million initiative, called STOMP, to develop tools to measure and monitor microplastics in drinking water and, in a later stage, to remove them. The Safe Drinking Water Act requires the EPA to publish an updated version of its Contaminant Candidate List every five years. This is the sixth iteration of the list. Microplastics and pharmaceuticals appear in the draft of the upcoming list, alongside per- and polyfluoroalkyl substances, or PFAS, and dozens of other chemicals and microbes. Their inclusion on the list gives local regulators a tool to evaluate risks in their water supply, the EPA says, and it can set the stage for more research and regulatory action -- but doesn't actually guarantee that will happen.

Read more of this story at Slashdot.
